
    Labour shortage in Hungary: legal framework, opportunities and challenges for Vietnamese migrant workers

    The economic recovery that followed the COVID-19 pandemic created a labour shortage in the Hungarian labour market. One possible answer to this pressing problem is migrant workers from third countries outside the EU, such as Vietnam. The legal basis for this was laid by the mutual trade agreement concluded between the EU and Vietnam years earlier, as well as by the mutual cooperation agreement in force between Vietnam and Hungary. The article reviews the framework and key substantive elements of the agreements affecting the legal status of migrant workers in the EU-Vietnam-Hungary relationship. It also analyses the opportunities and the outstanding problems that Hungary's labour shortage presents for potential Vietnamese migrant workers.

    Latent Relational Metric Learning via Memory-based Attention for Collaborative Ranking

    This paper proposes a new neural architecture for collaborative ranking with implicit feedback. Our model, LRML (Latent Relational Metric Learning), is a novel metric learning approach for recommendation. More specifically, instead of simple push-pull mechanisms between user and item pairs, we propose to learn latent relations that describe each user-item interaction. This helps to alleviate the potential geometric inflexibility of existing metric learning approaches, enabling not only better performance but also greater modeling capability, which allows our model to scale to a larger number of interactions. To do so, we employ an augmented memory module and learn to attend over its memory blocks to construct latent relations. The memory-based attention module is controlled by the user-item interaction, making the learned relation vector specific to each user-item pair. Hence, this can be interpreted as learning an exclusive and optimal relational translation for each user-item interaction. The proposed architecture demonstrates state-of-the-art performance across multiple recommendation benchmarks. LRML outperforms other metric learning models by 6%-7.5% in terms of Hits@10 and nDCG@10 on large datasets such as Netflix and MovieLens20M. Moreover, qualitative studies demonstrate that our proposed model is able to infer and encode explicit sentiment, temporal, and attribute information despite being trained only on implicit feedback. As such, this ascertains the ability of LRML to uncover hidden relational structure within implicit datasets. Comment: WWW 2018
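    The abstract describes a memory-based attention module that turns each user-item interaction into a relation vector used in a translation-style metric. The sketch below illustrates that idea in PyTorch; the Hadamard-product key, the memory dimensions, and the initialisation are illustrative assumptions, not the authors' exact implementation.

```python
# Illustrative sketch of an LRML-style latent relation layer.
# All hyperparameters and the joint user-item key are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LatentRelationLayer(nn.Module):
    def __init__(self, dim=64, num_memories=20):
        super().__init__()
        self.keys = nn.Parameter(torch.randn(num_memories, dim) * 0.1)
        self.memories = nn.Parameter(torch.randn(num_memories, dim) * 0.1)

    def forward(self, user_emb, item_emb):
        # Joint key from the user-item interaction (assumed Hadamard product).
        joint = user_emb * item_emb                      # (batch, dim)
        attn = F.softmax(joint @ self.keys.t(), dim=-1)  # (batch, num_memories)
        # Relation vector: attention-weighted sum over the memory blocks,
        # so each user-item pair gets its own relational translation.
        return attn @ self.memories                      # (batch, dim)

def lrml_score(user_emb, item_emb, relation):
    # Translation-style metric: a smaller distance ||u + r - i|| means a
    # better match, so the score is its negation.
    return -torch.sum((user_emb + relation - item_emb) ** 2, dim=-1)
```

    Ranking a user's candidate items then reduces to computing lrml_score against each item embedding and sorting by the result.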

    Cross Temporal Recurrent Networks for Ranking Question Answer Pairs

    Temporal gates play a significant role in modern recurrent-based neural encoders, enabling fine-grained control over recursive compositional operations over time. In recurrent models such as the long short-term memory (LSTM), temporal gates control the amount of information retained or discarded over time, not only playing an important role in shaping the learned representations but also serving as a protection against vanishing gradients. This paper explores the idea of learning temporal gates for sequence pairs (question and answer) that jointly influence the learned representations in a pairwise manner. In our approach, temporal gates are learned via 1D convolutional layers and then cross-applied between question and answer for joint learning. Empirically, we show that this conceptually simple sharing of temporal gates leads to competitive performance across multiple benchmarks. Intuitively, our network learns representations of question and answer pairs that are aware of what each other is remembering or forgetting, i.e., pairwise temporal gating. Via extensive experiments, we show that our proposed model achieves state-of-the-art performance on two community-based QA datasets and competitive performance on one factoid-based QA dataset. Comment: Accepted to AAAI 2018
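    To make the pairwise gating concrete, here is a minimal PyTorch sketch of temporal gates learned with 1D convolutions and cross-applied between the two sequences. Mean-pooling each gate over time, so it can modulate the other sequence regardless of length, is a simplifying assumption of this sketch rather than the paper's exact construction.

```python
# Hedged sketch of cross-applied temporal gates for a (question, answer) pair.
import torch
import torch.nn as nn

class CrossTemporalGate(nn.Module):
    def __init__(self, dim=128, kernel_size=3):
        super().__init__()
        pad = kernel_size // 2
        self.gate_q = nn.Conv1d(dim, dim, kernel_size, padding=pad)
        self.gate_a = nn.Conv1d(dim, dim, kernel_size, padding=pad)

    def forward(self, q, a):
        # q: (batch, len_q, dim), a: (batch, len_a, dim)
        gq = torch.sigmoid(self.gate_q(q.transpose(1, 2)))  # (batch, dim, len_q)
        ga = torch.sigmoid(self.gate_a(a.transpose(1, 2)))  # (batch, dim, len_a)
        # Pool each gate over time so it can be applied to the *other*
        # sequence whatever its length (an assumption of this sketch).
        gq = gq.mean(dim=2).unsqueeze(1)  # (batch, 1, dim)
        ga = ga.mean(dim=2).unsqueeze(1)
        # Cross-apply: the question is modulated by the answer's gate and
        # vice versa, i.e., pairwise temporal gating.
        return q * ga, a * gq
```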

    Seve: Automatic tool for verification of security protocols

    Master's thesis (Master of Science)

    Textual Manifold-based Defense Against Natural Language Adversarial Examples

    Full text link
    Recent studies on adversarial images have shown that they tend to leave the underlying low-dimensional data manifold, making it significantly harder for current models to predict them correctly. This so-called off-manifold conjecture has inspired a novel line of defenses against adversarial attacks on images. In this study, we find that a similar phenomenon occurs in the contextualized embedding space induced by pretrained language models, in which adversarial texts tend to have embeddings that diverge from the manifold of natural ones. Based on this finding, we propose Textual Manifold-based Defense (TMD), a defense mechanism that projects text embeddings onto an approximated embedding manifold before classification. This reduces the complexity of potential adversarial examples, which ultimately enhances the robustness of the protected model. In extensive experiments, our method consistently and significantly outperforms previous defenses under various attack settings without trading off clean accuracy. To the best of our knowledge, this is the first NLP defense that leverages the manifold structure against adversarial attacks. Our code is available at https://github.com/dangne/tmd.
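    As a rough illustration of the projection step, the sketch below fits a low-dimensional PCA subspace on clean-text embeddings and maps incoming embeddings onto it before classification. The paper approximates the manifold with a learned generative model; PCA here is only a linear stand-in chosen to keep the example self-contained.

```python
# Toy stand-in for manifold projection before classification.
# PCA replaces the paper's learned manifold approximation (an assumption).
import numpy as np
from sklearn.decomposition import PCA

def fit_manifold(clean_embeddings: np.ndarray, n_components: int = 32) -> PCA:
    """Fit a linear approximation of the natural-text embedding manifold."""
    return PCA(n_components=n_components).fit(clean_embeddings)

def project_onto_manifold(pca: PCA, embedding: np.ndarray) -> np.ndarray:
    """Map a (possibly adversarial) embedding back onto the subspace, then
    hand the result to the downstream classifier."""
    low = pca.transform(embedding.reshape(1, -1))
    return pca.inverse_transform(low).ravel()
```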